49 results for high-performance computing, computational modelling, multi-scale simulation, Visible Cell, orthologue mapping, systems biology

at Indian Institute of Science - Bangalore - India


Relevance:

100.00%

Publisher:

Abstract:

Computational grids are increasingly being used for executing large multi-component scientific applications. The most widely reported advantages of application execution on grids are the performance benefits, in terms of speed, problem size or quality of solutions, due to the increased number of processors. We explore the possibility of improved performance on grids without increasing the application's processor space. For this, we consider grids with multiple batch systems. We explore the challenges involved in, and the advantages of, executing long-running multi-component applications on multiple batch sites, with the popular multi-component climate simulation application CCSM as the motivation. We have performed extensive simulation studies to estimate the single- and multi-site execution rates of the applications for different system characteristics. Our experiments show that in many cases, multi-site batch executions can achieve better execution rates than a single-site execution.
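
The trade-off can be illustrated with a small simulation. The sketch below is a minimal, hypothetical model (the site parameters, the segment-based job model, and the "submit to the shortest-wait site" policy are assumptions, not taken from the paper): a long run is split into successive batch jobs, each preceded by a random queue wait, and the effective execution rate is the fraction of wall-clock time spent computing.

```python
"""Minimal sketch (not the paper's model): effective execution rate of a long run
split into successive batch jobs, each preceded by a random queue wait.
Site parameters and the shortest-wait submission policy are assumptions."""
import random

def simulate(total_compute_h, slot_h, site_mean_waits_h, trials=1000):
    """Return the mean fraction of wall-clock time spent computing."""
    rates = []
    for _ in range(trials):
        remaining, wall = total_compute_h, 0.0
        while remaining > 0:
            # queue wait drawn per site; submit to the site that starts us soonest
            wait = min(random.expovariate(1.0 / m) for m in site_mean_waits_h)
            run = min(slot_h, remaining)          # batch slot limits each job
            wall += wait + run
            remaining -= run
        rates.append(total_compute_h / wall)
    return sum(rates) / len(rates)

single = simulate(total_compute_h=2000, slot_h=24, site_mean_waits_h=[12.0])
multi = simulate(total_compute_h=2000, slot_h=24, site_mean_waits_h=[12.0, 12.0])
print(f"single-site execution rate: {single:.2f}")
print(f"two-site execution rate:    {multi:.2f}")
```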

Relevance:

100.00%

Publisher:

Abstract:

The high-temperature strength of alloys with (gamma + gamma') microstructure is primarily due to the resistance of the ordered precipitate to cutting by matrix dislocations. Such shearing requires higher stresses since it involves the creation of a planar fault. Planar fault energy is known to depend on composition. This implies that the composition on the fault may differ from that in the bulk for energetic reasons. Such segregation (or desegregation) of specific alloying elements to the fault may result in Suzuki strengthening, which has not been explored extensively in these systems. In this work, segregation (or desegregation) of alloying elements to planar faults was studied computationally in Ni3(Al, Ti) and Co3(W, Al) type gamma' precipitates. The composition dependence of the APB energy and the heat of mixing were evaluated from first-principles electronic structure calculations. A phase-field model incorporating the first-principles results was used to simulate the motion of an extended superdislocation under stress concurrently with composition evolution. Results reveal that in both systems, significant (de)segregation occurs on equilibration. On application of stress, solutes were dragged along with the APB in some cases. It was also noted that the velocity of the superdislocation under an applied stress is strongly dependent on atomic mobility (i.e., diffusivity).
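
As a rough illustration of why a composition-dependent fault energy drives segregation, the sketch below evaluates a Langmuir-McLean-type equilibrium fault composition for an assumed segregation energy and interpolates the APB energy between two endpoint values; all numbers and the linear interpolation are illustrative placeholders, not the first-principles results of the paper.

```python
"""Illustrative sketch only: equilibrium solute enrichment on a planar fault via a
Langmuir-McLean-type isotherm, with assumed (placeholder) energies."""
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def fault_site_fraction(x_bulk, e_seg_ev, temperature_k):
    """Equilibrium solute fraction on fault sites; e_seg < 0 favours segregation."""
    ratio = x_bulk / (1.0 - x_bulk) * math.exp(-e_seg_ev / (K_B * temperature_k))
    return ratio / (1.0 + ratio)

def apb_energy(x_fault, gamma_lean=0.18, gamma_rich=0.10):
    """Placeholder linear interpolation of APB energy (J/m^2) with fault composition."""
    return gamma_lean + (gamma_rich - gamma_lean) * x_fault

x_bulk = 0.05                        # assumed bulk solute fraction
x_f = fault_site_fraction(x_bulk, e_seg_ev=-0.15, temperature_k=1173.0)
print(f"fault solute fraction: {x_f:.3f} (bulk {x_bulk:.3f})")
print(f"APB energy at fault composition: {apb_energy(x_f):.3f} J/m^2")
```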

Relevance:

100.00%

Publisher:

Abstract:

Based on dynamic inversion, a relatively straightforward approach is presented in this paper for nonlinear flight control design of high-performance aircraft, which does not require the normal and lateral acceleration commands to be first converted to body rates before computing the required control inputs. This leads to substantial improvement of the tracking response. Promising results are obtained from six-degree-of-freedom simulation studies of the F-16 aircraft, which are found to be superior to an existing approach (also based on dynamic inversion). The new approach has two potential benefits, namely reduced oscillatory response (including elimination of non-minimum phase behavior) and reduced control magnitude. Next, a model-following neuro-adaptive design is used to augment the nominal design in order to assure robust performance in the presence of parameter inaccuracies in the model. Note that in this approach the model update takes place adaptively online, and hence it is philosophically similar to indirect adaptive control. However, unlike a typical indirect adaptive control approach, there is no need to update the individual parameters explicitly. Instead, the inaccuracy in the system output dynamics is captured directly and then used in modifying the control. This leads to faster adaptation, which helps in stabilizing the unstable plant more quickly. A robustness study based on a large number of simulations shows that the adaptive design has a good amount of robustness with respect to the expected parameter inaccuracies in the model.
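
The core dynamic-inversion step can be summarised generically: for dynamics of the form x_dot = f(x) + g(x)u, the control is chosen so that the output follows prescribed first-order error dynamics. The scalar example below is purely illustrative; the plant functions and gains are assumptions, not the F-16 model or the control laws of the paper.

```python
"""Generic dynamic-inversion sketch (illustrative, not the paper's F-16 design):
for x_dot = f(x) + g(x)*u, pick u so that x_dot = k*(x_ref - x)."""

def f(x):
    return -0.8 * x + 0.1 * x ** 2     # placeholder nonlinear plant dynamics

def g(x):
    return 2.0 + 0.05 * x              # placeholder control effectiveness (nonzero)

def dynamic_inversion_control(x, x_ref, k=3.0):
    x_dot_desired = k * (x_ref - x)    # first-order desired error dynamics
    return (x_dot_desired - f(x)) / g(x)

# simple Euler simulation of command tracking
dt, x, x_ref = 0.01, 0.0, 1.0
for _ in range(300):
    u = dynamic_inversion_control(x, x_ref)
    x += dt * (f(x) + g(x) * u)
print(f"state after 3 s: {x:.3f} (command {x_ref})")
```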

Relevance:

100.00%

Publisher:

Abstract:

Human Leukocyte Antigen (HLA) plays an important role in presenting foreign pathogens to the immune system, thereby eliciting early immune responses. HLA genes are highly polymorphic, giving rise to diverse antigen presentation capability. An important factor contributing to the enormous variation in individual responses to diseases is the difference in their HLA profiles. The heterogeneity in allele-specific disease responses determines the overall epidemiological outcome of the disease. Here we propose an agent-based computational framework, capable of incorporating allele-specific information, to analyze disease epidemiology. This framework assumes an SIR model to estimate the average disease transmission and recovery rates. Using an epitope prediction tool, it performs sequence-based epitope detection for a given pathogenic genome and derives an allele-specific disease susceptibility index depending on the epitope detection efficiency. The allele-specific disease transmission rate that follows is then fed to the agent-based epidemiology model to analyze the disease outcome. The methodology presented here has potential use in understanding how a disease spreads and in identifying effective measures to control it.
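
A minimal sketch of the idea, assuming a population partitioned by HLA allele class with an allele-specific transmission rate; the frequencies, rates, and the deterministic SIR form below are placeholders, not outputs of the epitope-prediction step or the agent-based model described in the paper.

```python
"""Sketch only: deterministic SIR with allele-specific transmission rates.
Allele-class frequencies and betas are hypothetical placeholders."""

def simulate_sir(groups, gamma=0.1, days=200, dt=1.0, i0=1e-4):
    """groups: list of (population frequency, allele-specific beta)."""
    s = [freq * (1.0 - i0) for freq, _ in groups]
    i = [freq * i0 for freq, _ in groups]
    r = [0.0] * len(groups)
    for _ in range(int(days / dt)):
        prevalence = sum(i)                            # homogeneous mixing assumed
        for k, (_, beta) in enumerate(groups):
            new_inf = beta * s[k] * prevalence * dt    # susceptibility scales with allele beta
            recov = gamma * i[k] * dt
            s[k] -= new_inf
            i[k] += new_inf - recov
            r[k] += recov
    return sum(r)                                      # final epidemic size

# hypothetical allele classes: (frequency, allele-specific transmission rate)
groups = [(0.6, 0.25), (0.3, 0.15), (0.1, 0.05)]
print(f"final epidemic size: {simulate_sir(groups):.3f}")
```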

Relevance:

100.00%

Publisher:

Abstract:

We report a pH-dependent conformational transition in short, defined homopolymeric deoxyadenosines (dA(15)) from a single helical structure with stacked nucleobases at neutral pH to a double-helical, parallel-stranded duplex held together by AH-HA base pairs at acidic pH. Using native PAGE, 2D NMR, circular dichroism (CD) and fluorescence spectroscopy, we have characterized the two different pH-dependent forms of dA(15). The pH-triggered transition between the two defined helical forms of dA(15) is monitored by CD and fluorescence, and the kinetics of this conformational switch are found to occur on a millisecond time scale. This robust, highly reversible, pH-induced transition between the two well-defined structured states of dA(15) represents a new molecular building block for the construction of quick-response, pH-switchable architectures in structural DNA nanotechnology.

Relevance:

100.00%

Publisher:

Abstract:

In this letter, we propose the design and simulation study of a novel transistor, called the HFinFET, which is a hybrid of an HEMT and a FinFET, to obtain excellent performance and good OFF-state control. Following a description of the design, 3-D device simulations have been performed to predict the characteristics of the device. The device has been benchmarked against published state-of-the-art HEMT data as well as planar and nonplanar Si n-MOSFET data of comparable gate length using standard benchmarking techniques.
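
Typical drive-current and OFF-state benchmarks can be extracted from a simulated transfer characteristic, as in the sketch below; the synthetic Id-Vg curve and all numbers are illustrative stand-ins, not the HFinFET simulation data of the letter.

```python
"""Sketch: extract Ion/Ioff and subthreshold swing from a (synthetic) Id-Vg curve.
The exponential-to-linear curve below is a placeholder, not TCAD output."""
import math

def synthetic_id(vg, ss_mv_dec=75.0, vt=0.3, i_vt=1e-6):
    """Placeholder transfer characteristic: exponential below Vt, roughly linear above."""
    if vg < vt:
        return i_vt * 10 ** ((vg - vt) * 1000.0 / ss_mv_dec)
    return i_vt + 1e-3 * (vg - vt)

vdd = 1.0
vgs = [k * 0.05 for k in range(int(vdd / 0.05) + 1)]
ids = [synthetic_id(v) for v in vgs]

i_off, i_on = ids[0], ids[-1]
# subthreshold swing from two points well below threshold: SS = dVg / dlog10(Id)
ss = (vgs[4] - vgs[1]) / (math.log10(ids[4]) - math.log10(ids[1])) * 1000.0
print(f"Ion/Ioff = {i_on / i_off:.2e},  SS = {ss:.1f} mV/dec")
```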

Relevance:

100.00%

Publisher:

Abstract:

Clustered architecture processors are preferred for embedded systems because centralized register file architectures scale poorly in terms of clock rate, chip area, and power consumption. Although clustering helps by improving clock speed, reducing the energy consumption of the logic, and making the design simpler, it introduces extra overheads by way of inter-cluster communication. This communication happens over long global wires, which leads to delays in execution and significantly higher energy consumption. In this paper, we propose a new instruction scheduling algorithm that exploits scheduling slacks of instructions and communication slacks of data values together to achieve better energy-performance trade-offs for clustered architectures with heterogeneous interconnect. Our instruction scheduling algorithm achieves 35% and 40% reductions in communication energy, while the overall energy-delay product improves by 4.5% and 6.5%, respectively, for 2-cluster and 4-cluster machines, with a marginal increase (1.6% and 1.1%) in execution time. Our test bed uses the Trimaran compiler infrastructure.
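
The essence of exploiting communication slack can be shown with a small sketch: each inter-cluster transfer whose slack covers the slow interconnect's extra latency is routed over the low-energy wires. The latency and energy numbers and the transfer list are hypothetical, and this is a simplification, not the scheduling algorithm of the paper itself.

```python
"""Sketch: route inter-cluster transfers over slow/low-energy wires when slack allows.
Interconnect latencies, energies, and the transfer list are hypothetical."""

FAST = {"latency": 1, "energy": 4.0}   # cycles, arbitrary energy units
SLOW = {"latency": 3, "energy": 1.0}

def assign_interconnect(transfers):
    """transfers: list of (name, slack_cycles). Returns assignment and total energy."""
    total_energy, assignment = 0.0, {}
    for name, slack in transfers:
        # a transfer tolerates the slow wire if its slack absorbs the extra latency
        wire = SLOW if slack >= SLOW["latency"] - FAST["latency"] else FAST
        assignment[name] = "slow" if wire is SLOW else "fast"
        total_energy += wire["energy"]
    return assignment, total_energy

transfers = [("t1", 0), ("t2", 5), ("t3", 2), ("t4", 1)]
assignment, energy = assign_interconnect(transfers)
print(assignment, "communication energy:", energy)
```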

Relevance:

100.00%

Publisher:

Abstract:

Wave pipelining is a design technique for increasing the throughput of a digital circuit or system without introducing pipelining registers between adjacent combinational logic blocks in the circuit/system. However, it requires balancing of the delays along all paths from the input to the output, which comes in the way of its implementation. Static CMOS is inherently susceptible to delay variation with input data and hence receives a low priority for wave-pipelined digital design. On the other hand, ECL and CML, which are amenable to wave pipelining, lack the compactness and low-power attributes of CMOS. In this paper we attempt to exploit wave pipelining in CMOS technology. We use a single generic building block in Normal Process Complementary Pass Transistor Logic (NPCPL), modeled after CPL, to achieve equal delay along all the propagation paths in the logic structure. An 8×8-bit multiplier is designed using this logic in a 0.8 µm technology. The carry-save multiplier architecture is modified suitably to support wave pipelining, viz., the logic depth of all the paths is made identical. The 1 mm×0.6 mm multiplier core supports a throughput of 400 MHz and dissipates a total power of 0.6 W. We develop simple enhancements to the NPCPL building blocks that allow the multiplier to sustain throughputs in excess of 600 MHz. The methodology can be extended to introduce wave pipelining in other circuits as well.
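
The timing intuition behind wave pipelining is that throughput is limited by the spread between the longest and shortest path delays rather than by the total delay. The sketch below evaluates this standard first-order constraint for assumed (placeholder) delay, setup, and skew numbers, not the reported NPCPL multiplier timings.

```python
"""Sketch: first-order wave-pipelining throughput bound from path-delay spread.
All delay, skew, and setup numbers below are assumed placeholders."""

def min_cycle_time_ns(d_max, d_min, t_setup=0.2, t_skew=0.3):
    """Classic first-order bound: T_clk >= (d_max - d_min) + setup + skew."""
    return (d_max - d_min) + t_setup + t_skew

def waves_in_flight(d_max, t_clk):
    """Approximate number of data waves simultaneously inside the logic."""
    return int(d_max // t_clk)

d_max, d_min = 9.0, 7.6            # ns, assumed delays of balanced paths
t_clk = min_cycle_time_ns(d_max, d_min)
print(f"min cycle time ~{t_clk:.2f} ns -> throughput ~{1e3 / t_clk:.0f} MHz")
print(f"waves in flight: ~{waves_in_flight(d_max, t_clk)}")
```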

Relevance:

100.00%

Publisher:

Abstract:

Biomedical engineering solutions like surgical simulators need High Performance Computing (HPC) to achieve real-time performance. Graphics Processing Units (GPUs) offer HPC capabilities at low cost and low power consumption. In this work, it is demonstrated that a liver discretized by about 2500 finite element nodes can be graphically simulated in real time by making use of a GPU. The present work takes into consideration the time needed for data transfer from CPU to GPU and back from GPU to CPU. Although the behaviour of the liver is very complicated, the present computer simulation assumes linear elastostatics. The commercial software ANSYS is used to obtain the global stiffness matrix of the liver. Results show that GPUs are useful for real-time graphical simulation of the liver, which in turn is needed in simulators used for training surgeons in laparoscopic surgery. Although the computer simulation should also involve rendering, neither rendering nor the time needed for rendering and displaying the liver on a screen is considered in the present work. The present work is a demonstration of a concept; the concept has not been fully implemented and validated. Future work is to develop software that can accomplish real-time and realistic graphical simulation of the liver, with the rendered image of the liver on the screen changing in real time according to the position of the surgical tool tip, approximated as the mouse cursor in 3D.
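
The GPU workflow amounts to transferring the assembled stiffness matrix and load vector to the device, solving K u = f, and copying the displacements back. The torch-based sketch below illustrates that pattern with a random symmetric positive-definite stand-in for the ANSYS-assembled stiffness matrix; the matrix size, dense storage, and the use of torch are assumptions, not the paper's implementation.

```python
"""Sketch: time CPU->GPU transfer, K u = f solve, and GPU->CPU copy-back.
A random dense SPD matrix stands in for the ANSYS-assembled stiffness matrix."""
import time
import torch

n = 3 * 2500                                    # ~2500 nodes x 3 dof, as an assumption
a = torch.randn(n, n, dtype=torch.float32)
k_cpu = a @ a.T + n * torch.eye(n)              # SPD stand-in for the stiffness matrix K
f_cpu = torch.randn(n, 1, dtype=torch.float32)  # load vector

device = "cuda" if torch.cuda.is_available() else "cpu"
t0 = time.perf_counter()
k_gpu, f_gpu = k_cpu.to(device), f_cpu.to(device)   # CPU -> GPU transfer
u_gpu = torch.linalg.solve(k_gpu, f_gpu)            # displacement solve on the device
u_cpu = u_gpu.cpu()                                 # GPU -> CPU copy-back
if device == "cuda":
    torch.cuda.synchronize()
elapsed = time.perf_counter() - t0
print(f"device={device}, transfer + solve + copy-back: {elapsed * 1e3:.1f} ms")
```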

Relevance:

100.00%

Publisher:

Abstract:

Estimation of creep and shrinkage is critical in order to compute the loss of prestress with time, which in turn is needed to assess leak tightness and the safety margins available in containment structures of nuclear power plants. Short-term creep and shrinkage experiments have been conducted, using in-house test facilities developed specifically for the present research program, on 35 and 45 MPa normal concrete and 25 MPa heavy-density concrete. In the extensive creep experimental program, cylinders are subjected to sustained load levels, typically for several days (until a negligible increase of strain with time is observed in the creep specimen), to provide total creep strain versus time curves for the two normal-density concrete grades and the one heavy-density concrete grade at different load levels, different ages at loading, and different relative humidities. Shrinkage is also being studied on prism specimens of the same mix grades. In the first instance, creep and shrinkage prediction models reported in the literature have been used to predict the creep and shrinkage levels in the subsequent experimental data with acceptable accuracy. Macro-scale short-term experiments and analytical model development to estimate time-dependent deformation under sustained loads over the long term form one part of the study; they account for the composite rheology through the influence of parameters such as characteristic strength, age of concrete at loading, relative humidity, temperature, mix proportion (cement : fine aggregate : coarse aggregate : water) and volume-to-surface ratio, and the associated uncertainties in these variables. At the same time, it is widely believed that strength, early-age rheology, creep and shrinkage are affected by material properties at the nano-scale that are not well established. In order to understand and improve cement and concrete properties, an investigation of the nanostructure of the composite and how it relates to the local mechanical properties is being undertaken. While the results of creep and shrinkage obtained at the macro-scale and their predictions through rheological modeling are satisfactory, the nano- and micro-indentation experimental and analytical studies are presently underway. Computational mechanics based models for creep and shrinkage in concrete must necessarily account for numerous parameters that impact the short- and long-term response. A Kelvin-type model with several elements representing the influence of the various factors that impact the behaviour is under development. The immediate short-term deformation (elastic response), the effects of relative humidity and temperature, volume-to-surface ratio, water-cement ratio and aggregate-cement ratio, load levels, and the age of concrete at loading are parameters accounted for in this model. Inputs to this model, such as the pore structure and mechanical properties at the micro/nano scale, have been taken from scanning electron microscopy and micro/nano-indentation of the sample specimens.
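
A Kelvin-type model of the kind mentioned above can be written as a chain of spring-dashpot elements whose compliances sum. The sketch below evaluates creep strain under constant stress for assumed element moduli and retardation times; the values are placeholders, not the calibrated model of this study.

```python
"""Sketch: creep strain of a Kelvin chain under constant stress.
Element moduli E_i (MPa) and retardation times tau_i (days) are placeholders."""
import math

def kelvin_chain_strain(stress_mpa, t_days, e_elastic, elements):
    """elements: list of (E_i, tau_i); strain = sigma/E + sum sigma/E_i * (1 - exp(-t/tau_i))."""
    strain = stress_mpa / e_elastic                     # immediate elastic response
    for e_i, tau_i in elements:
        strain += (stress_mpa / e_i) * (1.0 - math.exp(-t_days / tau_i))
    return strain

elements = [(120_000.0, 1.0), (60_000.0, 10.0), (40_000.0, 100.0)]  # assumed chain
for t in (1, 7, 28, 90):
    eps = kelvin_chain_strain(stress_mpa=10.0, t_days=t,
                              e_elastic=30_000.0, elements=elements)
    print(f"t = {t:3d} d: total strain = {eps * 1e6:.0f} microstrain")
```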

Relevance:

100.00%

Publisher:

Abstract:

Computational grids with multiple batch systems (batch grids) can be powerful infrastructures for executing long-running multi-component parallel applications. In this paper, we have constructed a middleware framework for executing such long-running applications spanning multiple submissions to the queues of multiple batch systems. We have used our framework for the execution of a prominent long-running multi-component climate modeling application, the Community Climate System Model (CCSM). Our framework coordinates the distribution, execution, migration and restart of the components of CCSM on the multiple queues, where the component jobs on different queues can have different queue-waiting and startup times.
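
The coordination logic can be pictured as a loop that resubmits the next job segment to the queue with the shortest estimated wait and restarts from the latest checkpoint. The sketch below is a simplified stand-in for such a middleware loop, not the actual framework; the wait estimates, segment lengths, and cost model are assumptions.

```python
"""Toy coordinator sketch: run a component to completion across multiple batch queues,
restarting each segment from the latest checkpoint. Wait estimates are assumptions."""
import random

def estimated_wait_h(queue):
    """Placeholder for a real queue-wait predictor."""
    return random.uniform(*queue["wait_range_h"])

def run_component(total_sim_days, segment_days, queues):
    checkpoint, wall_h, log = 0, 0.0, []
    while checkpoint < total_sim_days:
        waits = {q["name"]: estimated_wait_h(q) for q in queues}
        target = min(waits, key=waits.get)          # migrate to the shortest wait
        run_h = segment_days * 0.5                  # assumed cost: 0.5 h per simulated day
        wall_h += waits[target] + run_h
        checkpoint = min(checkpoint + segment_days, total_sim_days)
        log.append((target, checkpoint))            # restart point after this segment
    return wall_h, log

queues = [{"name": "siteA", "wait_range_h": (1, 12)},
          {"name": "siteB", "wait_range_h": (2, 6)}]
wall_h, log = run_component(total_sim_days=365, segment_days=60, queues=queues)
print(f"finished in ~{wall_h:.0f} wall-clock hours; restart log: {log}")
```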

Relevance:

100.00%

Publisher:

Abstract:

This paper proposes a new, straightforward technique based on dynamic inversion, which is applied for tracking pilot commands in high-performance aircraft. The pilot commands assumed in the longitudinal mode are normal acceleration and total velocity (while roll angle and lateral acceleration are maintained at zero). In the lateral mode, roll rate and total velocity are used as pilot commands (while climb rate and lateral acceleration are maintained at zero). Ensuring zero lateral acceleration leads to better turn coordination. A six-degree-of-freedom model of the F-16 aircraft is used for both control design and simulation studies. Promising results are obtained, which are found to be superior to those of an existing approach (also based on dynamic inversion). The new approach has two potential benefits, namely reduced oscillatory response and reduced control magnitude. Another advantage of this approach is that it leads to a significant reduction in the number of tuning parameters in the control design process.

Relevance:

100.00%

Publisher:

Abstract:

Past studies use deterministic models to evaluate optimal cache configurations or to explore the cache design space. However, with the increasing number of components present on a chip multiprocessor (CMP), deterministic approaches do not scale well. Hence, we apply probabilistic genetic algorithms (GA) to determine a near-optimal cache configuration for a sixteen-tile CMP. We propose and implement a faster trace-based approach to estimate the fitness of a chromosome. It shows up to 218x simulation speedup over cycle-accurate architectural simulation. Our methodology can be applied to solve other cache optimization problems, such as design space exploration of caches and their partitioning among applications/virtual machines.
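
A genetic-algorithm search of this kind encodes one cache parameter set per chromosome and ranks chromosomes with a fast fitness estimate. The sketch below uses a synthetic fitness function as a stand-in for the trace-based estimator, and the parameter ranges are hypothetical, not the configuration space explored in the paper.

```python
"""GA sketch for cache-configuration search. The fitness function is a synthetic
stand-in for a trace-based estimator; parameter ranges are hypothetical."""
import random

SIZES_KB = [256, 512, 1024, 2048]
ASSOCS = [4, 8, 16]
LINES_B = [32, 64, 128]

def random_chromosome():
    return (random.choice(SIZES_KB), random.choice(ASSOCS), random.choice(LINES_B))

def fitness(chrom):
    """Synthetic placeholder: reward capacity/associativity, penalise an area-like cost."""
    size, assoc, line = chrom
    return size * 0.01 + assoc * 2.0 - (size * assoc) / 4096.0 + line * 0.02

def crossover(a, b):
    return tuple(random.choice(pair) for pair in zip(a, b))

def mutate(chrom, rate=0.2):
    return tuple(random_chromosome()[i] if random.random() < rate else gene
                 for i, gene in enumerate(chrom))

population = [random_chromosome() for _ in range(20)]
for _ in range(30):                                    # generations
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                          # elitist selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(10)]
    population = parents + children
print("best configuration (size KB, assoc, line B):", max(population, key=fitness))
```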